Extended pipeline for content-based feature engineering in music genre recognition
We present a feature engineering pipeline for constructing musical signal characteristics to be used in a supervised model for musical genre identification. The key idea is to extend the traditional two-step process of extraction and classification with additional stand-alone phases that are no longer organized in a waterfall scheme; the whole system is realized by traversing backtrack arrows and cycles between the various stages. To give a compact and effective representation of the features, standard early temporal integration is combined with further selection and extraction phases: on the one hand, the selection of the most meaningful characteristics based on information gain, and on the other hand, the inclusion of the nonlinear correlations within this subset of features, captured by an autoencoder. Experiments conducted on the GTZAN dataset reveal a noticeable contribution of this methodology to the model's classification performance.
Comment: ICASSP 201
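The abstract describes a concrete chain of steps (temporal integration, information-gain selection, an autoencoder capturing nonlinear correlations, a classifier). Below is a minimal sketch of such a pipeline, not the authors' implementation; the feature dimensions, bottleneck size, training schedule, and SVM classifier are illustrative assumptions.

```python
# Sketch of the described pipeline (assumed details, not the paper's code):
# temporally integrated features -> information-gain selection ->
# autoencoder bottleneck for nonlinear correlations -> classifier.
import numpy as np
import torch
import torch.nn as nn
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC

def build_genre_model(X, y, k_selected=40, bottleneck=16, epochs=200):
    """X: (n_tracks, n_features) temporally integrated descriptors; y: genre labels."""
    # 1) keep the most informative features (information gain ~ mutual information)
    selector = SelectKBest(mutual_info_classif, k=k_selected).fit(X, y)
    X_sel = selector.transform(X).astype(np.float32)

    # 2) learn nonlinear correlations among the selected features with an autoencoder
    ae = nn.Sequential(
        nn.Linear(k_selected, bottleneck), nn.Tanh(),   # encoder
        nn.Linear(bottleneck, k_selected),              # decoder
    )
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    xs = torch.from_numpy(X_sel)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(xs), xs)
        loss.backward()
        opt.step()

    # 3) concatenate the selected features with the bottleneck code, train a classifier
    with torch.no_grad():
        codes = ae[:2](xs).numpy()   # encoder output only
    X_final = np.hstack([X_sel, codes])
    clf = SVC(kernel="rbf").fit(X_final, y)
    return selector, ae, clf
```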
Context-Dependent Acoustic Modeling without Explicit Phone Clustering
Phoneme-based acoustic modeling for large-vocabulary automatic speech recognition takes advantage of phoneme context. The large number of
context-dependent (CD) phonemes and their highly varying statistics require
tying or smoothing to enable robust training. Usually, Classification and
Regression Trees are used for phonetic clustering, which is standard in Hidden
Markov Model (HMM)-based systems. However, this solution introduces a secondary
training objective and does not allow for end-to-end training. In this work, we
address direct phonetic context modeling for the hybrid Deep Neural Network (DNN)/HMM, which does not rely on any phone clustering algorithm to determine the HMM state inventory. By performing different
decompositions of the joint probability of the center phoneme state and its
left and right contexts, we obtain a factorized network consisting of different
components, trained jointly. Moreover, the representation of the phonetic
context for the network relies on phoneme embeddings. The recognition accuracy of our proposed models on the Switchboard task is comparable to, and slightly better than, that of the hybrid model using standard state-tying decision trees.
Comment: Submitted to Interspeech 202
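As a rough illustration of the factorized modeling idea, the sketch below guesses at one possible decomposition, p(c|x) * p(l|c,x) * p(r|c,l,x), and is not the paper's exact architecture: the center-state factor sees only the acoustic features, while the left/right context factors are conditioned through learned phoneme embeddings rather than a clustered CD-state inventory. Layer sizes and the factor ordering are assumptions.

```python
# Hypothetical factorized context model with phoneme embeddings (an assumption,
# not the paper's network): all factors share the input features and are trained jointly.
import torch
import torch.nn as nn

class FactoredContextModel(nn.Module):
    def __init__(self, feat_dim, n_phones, n_center_states, emb_dim=64, hidden=512):
        super().__init__()
        self.phone_emb = nn.Embedding(n_phones, emb_dim)
        self.center = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, n_center_states))
        self.left = nn.Sequential(nn.Linear(feat_dim + emb_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_phones))
        self.right = nn.Sequential(nn.Linear(feat_dim + 2 * emb_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, n_phones))

    def forward(self, x, center_phone, left_phone):
        # x: acoustic features for one frame, shape (batch, feat_dim)
        log_p_center = self.center(x).log_softmax(-1)                  # p(c | x)
        c_emb = self.phone_emb(center_phone)
        log_p_left = self.left(torch.cat([x, c_emb], -1)).log_softmax(-1)      # p(l | c, x)
        l_emb = self.phone_emb(left_phone)
        log_p_right = self.right(torch.cat([x, c_emb, l_emb], -1)).log_softmax(-1)  # p(r | c, l, x)
        # summing the selected factor log-probabilities gives the joint training criterion
        return log_p_center, log_p_left, log_p_right
```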
Sample Drop Detection for Distant-speech Recognition with Asynchronous Devices Distributed in Space
In many applications of multi-microphone multi-device processing, the
synchronization among different input channels can be affected by the lack of a common clock and by isolated sample drops. In this work, we address the issue
of sample drop detection in the context of a conversational speech scenario,
recorded by a set of microphones distributed in space. The goal is to design a neural model that, given a short window in the time domain, detects whether one or more devices have been subjected to a sample drop event. The candidate time windows are selected from a set of large time intervals, possibly containing a sample drop, by a preprocessing step based on the normalized cross-correlation between signals acquired by different devices. The architecture of the neural network relies on
a CNN-LSTM encoder, followed by multi-head attention. The experiments are
conducted using both artificial and real data. Our proposed approach obtained an F1 score of 88% on an evaluation set extracted from the CHiME-5 corpus. Comparable performance was observed in a larger set of experiments conducted on multi-channel artificial scenes.
Comment: Submitted to ICASSP 202
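A compact sketch of the two stages described in this abstract follows; it is not the paper's implementation, and the window length, channel count, layer sizes, and the peak cross-correlation statistic used for candidate selection are illustrative assumptions.

```python
# Hypothetical sketch: cross-correlation-based candidate screening plus a
# CNN-LSTM encoder with multi-head attention for binary drop detection.
import numpy as np
import torch
import torch.nn as nn

def peak_normalized_xcorr(a, b):
    """Peak of the normalized cross-correlation between two single-channel windows."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    xc = np.correlate(a, b, mode="full") / len(a)
    return xc.max()   # a low peak across devices flags a candidate window

class DropDetector(nn.Module):
    """CNN-LSTM encoder followed by multi-head self-attention (sizes are assumptions)."""
    def __init__(self, n_channels=4, conv_dim=64, lstm_dim=128, heads=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, conv_dim, kernel_size=9, stride=4), nn.ReLU(),
            nn.Conv1d(conv_dim, conv_dim, kernel_size=9, stride=4), nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_dim, lstm_dim, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * lstm_dim, heads, batch_first=True)
        self.out = nn.Linear(2 * lstm_dim, 1)

    def forward(self, x):
        # x: (batch, n_channels, n_samples) time-domain window from the device array
        h = self.cnn(x).transpose(1, 2)          # (batch, time, conv_dim)
        h, _ = self.lstm(h)                      # (batch, time, 2*lstm_dim)
        h, _ = self.attn(h, h, h)                # self-attention over the encoded window
        return torch.sigmoid(self.out(h.mean(dim=1)))  # probability of a drop event
```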